
Generative AI: ChatGPT, the EU AI Act and the GDPR

Since April 2021, the European Union has been consulting on the AI Act, which is intended to establish the European legal framework for innovation with AI applications. In spring 2023, the EU Parliament agreed on adjustments for generative AI.

ChatGPT demonstrates in practice that AI technology develops faster than legislators act. Initially, generative AI applications were not even considered in the European legal framework.
This has now been remedied: on Friday, 28 April 2023, Members of the European Parliament (MEPs) reached an agreement on the EU Artificial Intelligence Act, with some adjustments regarding generative AI such as ChatGPT. ChatGPT itself was classified as "not high-risk".

The EU Parliament's agreement contains the following regarding generative AI applications:

  • The same rules apply to generative systems as to other self-learning systems.

  • Generative AI systems should not be classified as "high-risk" per se.

  • Whether a generative AI system is classified as "high-risk" depends on its application.

  • If an application poses a direct or indirect threat to life, if it can discriminate against people, or if it can endanger the rule of law or democracy, these are all factors that make an AI application "high-risk" from the EU's perspective.

However, the EU Parliament's agreement imposes a number of requirements on generative systems such as ChatGPT even if they are not "high-risk".

    Requirements on "not high-risk" generative AI systems


A new Article 28b will require developers of AI systems to test generative AI systems in advance for the risks they pose and to ensure data quality. According to the German newspaper Frankfurter Allgemeine Zeitung (FAZ), Article 28b also requires producers to explain, at least roughly, to what extent they have used protected data for training.

In addition, developers of generative systems are to be obliged to provide safeguards against imminent dangers, including defences against cyber attacks and against misuse for discrimination.

Requirements for high-risk AI applications


According to the EU Commission's draft of the AI Act, Annex III so far provides for eight areas of so-called high-risk AI systems. These mainly concern biometric identification, the categorisation of persons, AI systems related to professional training or assessment, and the management and operation of critical infrastructure. Autonomous vehicles and medical devices are also among the AI systems classified as high-risk, depending on the application of the AI.

Developers of high-risk AI applications have obligations regarding pre-testing, risk management and human oversight. There are also requirements related to data and data governance, documentation and record keeping, transparency and the provision of information to users, robustness and accuracy. Logging is intended to ensure that the functioning of the AI system is traceable throughout its lifecycle to an extent appropriate to the purpose of the system.

What is the next step for the AI Act and the Article 28b decision?


The EU Parliament's agreement will be put to a vote in committee on 11 May 2023.
It will then be voted on in plenary in June, after which discussions with the Member States can start.
The current agreement, including Article 28b, has put the AI Act on track. However, there is still room for further compromises before the agreement reached now is actually implemented in EU legislation.

ChatGPT and EU data protection


On 22 April 2023, an audit of ChatGPT from a data protection perspective was initiated in Germany.

The German regional data protection authorities initiated administrative proceedings against OpenAI, the developer of ChatGPT. The authorities want to examine whether the use of data and the algorithm behind ChatGPT comply with the EU's General Data Protection Regulation (GDPR), especially with regard to the data generated in the course of use. Open questions include:

  • Are the rights to information, correction and deletion respected?

  • Are there mechanisms for the protection of minors and the data of underage users?

OpenAI has a few weeks to answer a multi-page questionnaire from the German data protection authorities. The outcome will be interesting because, in our experience, many developers of AI applications ask themselves questions about data protection, and not only when developing generative AI.

Do you have questions about an innovation or an invention involving an AI application? Our patent law firm Köllner & Partner offers a highly qualified team with expertise in this area.

Please contact us by phone at +49 69 69 59 60-0 or by e-mail at info@kollner.eu.



